
    Stellar Parameters and Elemental Abundances of Late-G Giants

    The properties of 322 intermediate-mass late-G giants (including 10 planet-host stars) selected as targets of the Okayama Planet Search Program, many of which are red-clump giants, were comprehensively investigated by establishing their various stellar parameters (atmospheric parameters including turbulent velocity fields, metallicity, luminosity, mass, age, projected rotational velocity, etc.) and their photospheric chemical abundances for 17 elements, in order to study their mutual dependence, their connection with the existence of planets, and possible evolution-related characteristics. The metallicity distribution of planet-host giants was found to be almost the same as that of non-planet-host giants, in marked contrast to planet-host dwarfs, which tend to be metal-rich. Generally, the metallicities of these comparatively young (typical age of ~10^9 yr) giants tend to be somewhat lower than those of dwarfs of the same age, and super-metal-rich ([Fe/H] > 0.2) giants appear to be lacking. Apparent correlations were found among the abundances of C, O, and Na, suggesting that the surface compositions of these elements have undergone appreciable changes due to the dredge-up of H-burning products by evolution-induced deep envelope mixing, which becomes more efficient in higher-mass stars.
    Comment: Accepted for publication in PASJ (21 pages, 15 figures) (the wrong URL of e-tables in Ver.1 has been corrected in Ver.2)

    Fast Multi-frame Stereo Scene Flow with Motion Segmentation

    We propose a new multi-frame method for efficiently computing scene flow (dense depth and optical flow) and camera ego-motion for a dynamic scene observed from a moving stereo camera rig. Our technique also segments out moving objects from the rigid scene. In our method, we first estimate the disparity map and the 6-DOF camera motion using stereo matching and visual odometry. We then identify regions inconsistent with the estimated camera motion and compute per-pixel optical flow only at these regions. This flow proposal is fused with the camera-motion-based flow proposal using fusion moves to obtain the final optical flow and motion segmentation. This unified framework benefits all four tasks - stereo, optical flow, visual odometry, and motion segmentation - leading to overall higher accuracy and efficiency. Our method is currently ranked third on the KITTI 2015 scene flow benchmark. Furthermore, our CPU implementation runs in 2-3 seconds per frame, which is 1-3 orders of magnitude faster than the top six methods. We also report a thorough evaluation on challenging Sintel sequences with fast camera and object motion, where our method consistently outperforms OSF [Menze and Geiger, 2015], which is currently ranked second on the KITTI benchmark.
    Comment: 15 pages. To appear at IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). Our results were submitted to KITTI 2015 Stereo Scene Flow Benchmark in November 201
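    As a rough illustration of the pipeline this abstract describes, the Python sketch below wires the stages together with OpenCV: semi-global matching for disparity, a rigid-flow proposal induced by the camera motion, a dense optical-flow proposal, and a per-pixel consistency test standing in for the paper's fusion-move optimization. It is a simplified sketch, not the authors' code: the ego-motion (R, t), intrinsics K, and baseline are assumed to come from a separate visual-odometry module, and dense flow is computed everywhere rather than only at the inconsistent regions the paper restricts it to.

```python
# Sketch of the multi-frame scene-flow pipeline (assumptions: rectified
# stereo, known intrinsics K and baseline, and ego-motion (R, t) supplied
# by an external visual-odometry step; not the authors' implementation).
import numpy as np
import cv2

def disparity_map(left_gray, right_gray):
    """Stage 1: dense disparity via semi-global stereo matching."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    return sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

def rigid_flow(disp, K, R, t, baseline):
    """Stage 2: flow proposal induced by camera motion on the rigid scene.
    Back-project each pixel with its stereo depth, apply (R, t), re-project."""
    h, w = disp.shape
    depth = K[0, 0] * baseline / np.maximum(disp, 1e-3)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(np.float64)
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)      # 3-D points
    proj = K @ (R @ pts + t.reshape(3, 1))                   # move + project
    proj = (proj[:2] / proj[2]).reshape(2, h, w)
    return np.stack([proj[0] - xs, proj[1] - ys], axis=-1)   # (h, w, 2) flow

def fuse_and_segment(flow_rigid, flow_dense, thresh=3.0):
    """Stage 3 (simplified): the paper fuses the two proposals with fusion
    moves; here pixels whose dense flow disagrees with the rigid proposal
    are labeled moving and take the dense-flow value."""
    residual = np.linalg.norm(flow_dense - flow_rigid, axis=-1)
    moving = residual > thresh                               # segmentation mask
    fused = np.where(moving[..., None], flow_dense, flow_rigid)
    return fused, moving

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left_t = rng.integers(0, 255, (120, 160), dtype=np.uint8)
    right_t = np.roll(left_t, -4, axis=1)          # toy stereo pair
    left_t1 = np.roll(left_t, 2, axis=0)           # toy next frame
    disp = disparity_map(left_t, right_t)
    K = np.array([[100.0, 0, 80], [0, 100.0, 60], [0, 0, 1]])
    R, t = np.eye(3), np.array([0.0, 0.0, 0.1])    # assumed ego-motion
    f_rigid = rigid_flow(disp, K, R, t, baseline=0.5)
    f_dense = cv2.calcOpticalFlowFarneback(left_t, left_t1, None,
                                           0.5, 3, 15, 3, 5, 1.2, 0)
    fused, moving = fuse_and_segment(f_rigid, f_dense)
    print(fused.shape, moving.mean())
```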

    Future Person Localization in First-Person Videos

    We present a new task that predicts future locations of people observed in first-person videos. Consider a first-person video stream continuously recorded by a wearable camera. Given a short clip of a person extracted from the complete stream, we aim to predict that person's location in future frames. To enable this future person localization, we make the following three key observations: a) first-person videos typically involve significant ego-motion, which greatly affects the location of the target person in future frames; b) the scale of the target person acts as a salient cue for estimating the perspective effect in first-person videos; c) first-person videos often capture people up close, making it easier to leverage target poses (e.g., where they look) for predicting their future locations. We incorporate these three observations into a prediction framework with a multi-stream convolution-deconvolution architecture. Experimental results show our method to be effective on our new dataset as well as on a public social interaction dataset.
    Comment: Accepted to CVPR 201
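    To make the multi-stream convolution-deconvolution idea concrete, here is a minimal PyTorch sketch. The stream choices (past locations, scales, ego-motion, and pose keypoints), the channel sizes, and all names are illustrative assumptions rather than the authors' exact model: each stream is encoded with 1-D convolutions over time, the features are concatenated channel-wise, and a transposed convolution decodes them into future (x, y) locations.

```python
# Minimal sketch of a multi-stream convolution-deconvolution predictor in
# the spirit of the abstract (stream set, channel sizes, and names are
# assumptions, not the authors' exact architecture).
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """Encode one input stream (a feature time series) with 1-D convolutions."""
    def __init__(self, in_ch, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):                  # x: (batch, in_ch, T)
        return self.net(x)

class FuturePersonLocalizer(nn.Module):
    """Encode location, scale, ego-motion, and pose streams; fuse by channel
    concatenation; decode with a transposed convolution into future (x, y)."""
    def __init__(self):
        super().__init__()
        self.loc_enc = StreamEncoder(2)    # past (x, y) of the target person
        self.scale_enc = StreamEncoder(1)  # target scale as a perspective cue
        self.ego_enc = StreamEncoder(2)    # camera ego-motion per frame
        self.pose_enc = StreamEncoder(36)  # e.g., 18 body keypoints as (x, y)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(128, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 2, kernel_size=3, padding=1),
        )

    def forward(self, loc, scale, ego, pose):
        h = torch.cat([self.loc_enc(loc), self.scale_enc(scale),
                       self.ego_enc(ego), self.pose_enc(pose)], dim=1)
        return self.decoder(h)             # (batch, 2, T) future locations

if __name__ == "__main__":
    B, T = 4, 10                           # batch size and observed frames
    model = FuturePersonLocalizer()
    pred = model(torch.randn(B, 2, T), torch.randn(B, 1, T),
                 torch.randn(B, 2, T), torch.randn(B, 36, T))
    print(pred.shape)                      # torch.Size([4, 2, 10])
```

    In this sketch the prediction horizon equals the number of observed frames; a strided transposed convolution in the decoder would let it predict a longer horizon than the input.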